
[Torch] Add aten.add.float op with folding and lowering to arith #4500

Open

BruceXinXin wants to merge 1 commit into llvm:main from BruceXinXin:bruce_add_float_lowering

Conversation

@BruceXinXin

Add aten.add.float op

This PR adds support for the aten::add.float : (float, float) -> (float) operator, complementing the existing aten.sub.float and aten.mul.float ops.

Changes

  • ODS definition (GeneratedTorchOps.td): New Torch_AtenAddFloatOp with two Torch_FloatType operands, custom assembly format, and folder support.
  • Constant folding (TorchOps.cpp): AtenAddFloatOp::fold using atenBinaryFloatOperatorFoldHelper with a + b.
  • TorchToArith lowering (TorchToArith.cpp): Maps AtenAddFloatOp to arith::AddFOp via ConvertAtenBinaryOp, and marks it illegal on the conversion target.
  • ODS generation (torch_ods_gen.py): Added aten::add.float emit entry with has_folder=True.
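The folding rule described above can be illustrated with a small standalone Python sketch. The actual implementation lives in C++ (`AtenAddFloatOp::fold` via `atenBinaryFloatOperatorFoldHelper`); the function name below is a hypothetical stand-in for illustration, not torch-mlir API.

```python
# Sketch of the constant-folding semantics added for aten.add.float.
# The real logic is C++ (atenBinaryFloatOperatorFoldHelper with `a + b`);
# `fold_add_float` here is an illustrative, hypothetical name.

from typing import Optional


def fold_add_float(lhs: Optional[float], rhs: Optional[float]) -> Optional[float]:
    """Fold aten.add.float when both operands are compile-time constants.

    Returns the folded constant, or None when either operand is unknown,
    in which case the op survives folding and is later lowered to
    arith.addf by the TorchToArith conversion.
    """
    if lhs is None or rhs is None:
        return None  # not foldable; leave the op for lowering
    return lhs + rhs


print(fold_add_float(1.5, 2.25))   # both constant -> folds to 3.75
print(fold_add_float(None, 2.25))  # unknown operand -> None (no fold)
```

This mirrors how the existing `aten.sub.float` and `aten.mul.float` folders behave: folding only fires when both operands are constant.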

Register the `aten::add.float : (float, float) -> (float)` op in the
Torch dialect. Implement constant folding via `atenBinaryFloatOperatorFoldHelper`
and lower to `arith.addf` in the TorchToArith conversion.
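As a rough sketch of the intended conversion (the IR below is illustrative and not taken from the PR's test files; `!torch.float` conventionally converts to `f64` in TorchToArith):

```mlir
// Before conversion: Torch dialect
%res = torch.aten.add.float %a, %b : !torch.float, !torch.float -> !torch.float

// After TorchToArith: ConvertAtenBinaryOp maps the op to arith.addf
%res = arith.addf %a_f64, %b_f64 : f64
```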
@BruceXinXin BruceXinXin marked this pull request as ready for review March 12, 2026 01:00
@BruceXinXin
Author

Hi @sjarus, I don't have permission to add reviewers. Do you know who would be the best person to request a review from? Thank you!

